
    LightSense: enabling spatially aware handheld interaction devices

    Figure caption: The outside-in approach tracks the light source and streams the data to the phone over Bluetooth. a) A wall-mounted map with embedded light sensors provides hotspot tracking. b) A table-top setup tracks the phone with a camera through a diffused glass surface. c) The spatially aware device augments a physical map with a detailed interactive road map of the area of interest.
    The vision of spatially aware handheld interaction devices has been hard to realize. The difficulties in solving the general tracking problem for small devices have been addressed by several research groups; examples of such issues are performance, hardware availability, and platform independence. We present LightSense, an approach that employs commercially available components to achieve robust tracking of cell phone LEDs, without any modifications to the device. Cell phones can thus be promoted to interaction and display devices in ubiquitous installations of systems such as the ones we present here. This could enable a new generation of spatially aware handheld interaction devices that would unobtrusively empower and assist us in our everyday tasks.
    CR Categories: H.5.1 [Multimedia Information Systems]: Artificial, augmented, and virtual realities; H.5.2 [User Interfaces]: Graphical user interfaces, Input devices and strategies; I.3.6 [Methodology and Techniques]: Interaction techniques
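    The abstract does not include implementation details; purely as an illustration of the table-top, outside-in idea (a camera tracking the phone's bright LED through a diffused surface and mapping it to map coordinates), here is a minimal, hypothetical Python sketch. The blob-centroid detection, threshold value, and map scaling are assumptions for illustration, not details of the published system.

```python
import numpy as np

def track_led(frame, threshold=200):
    """Locate the phone's LED as the centroid of pixels brighter than
    `threshold` in a grayscale camera frame (outside-in tracking sketch)."""
    mask = frame >= threshold
    if not mask.any():
        return None  # no LED visible in this frame
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def to_map_coords(px, py, frame_shape, map_size_mm=(1000.0, 600.0)):
    """Scale a pixel position to a hypothetical map coordinate system (mm)."""
    h, w = frame_shape
    return px / w * map_size_mm[0], py / h * map_size_mm[1]

if __name__ == "__main__":
    # Synthetic 480x640 frame with a bright LED spot centred near (400, 120).
    frame = np.zeros((480, 640), dtype=np.uint8)
    frame[115:125, 395:405] = 255
    pos = track_led(frame)
    print("LED pixel position:", pos)
    print("Map position (mm):", to_map_coords(*pos, frame.shape))
```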

    SpaceTop: integrating 2D and spatial 3D interactions in a see-through desktop environment

    SpaceTop is a concept that fuses 2D and spatial 3D interactions in a single workspace. It extends the traditional desktop interface with interaction technology and visualization techniques that enable seamless transitions between 2D and 3D manipulations. SpaceTop allows users to type, click, and draw in 2D, and to directly manipulate interface elements that float in the 3D space above the keyboard. It makes it possible to easily switch from one modality to another, or to simultaneously use two modalities with different hands. We introduce hardware and software configurations for co-locating these various interaction modalities in a unified workspace using depth cameras and a transparent display. We describe new interaction and visualization techniques that allow users to interact with 2D elements floating in 3D space. We present the results from a preliminary user study that indicates the benefit of such hybrid workspaces.
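    As a rough sketch of how a hybrid 2D/3D workspace might decide which modality a hand is using, the hypothetical Python snippet below classifies a depth-tracked hand as 2D desktop input or 3D mid-air manipulation based on its height above the keyboard plane. The data structure, threshold, and function names are assumptions made for illustration and do not describe the paper's actual depth-camera pipeline.

```python
from dataclasses import dataclass

@dataclass
class HandSample:
    x: float       # lateral position (metres)
    y: float       # position toward/away from the display (metres)
    height: float  # height above the keyboard plane (metres)

def classify_modality(hand: HandSample, lift_threshold: float = 0.03) -> str:
    """Decide whether a tracked hand is doing 2D desktop input or 3D mid-air
    manipulation, based on its height above the keyboard plane.

    A single fixed threshold is used here; a real system would smooth the
    depth signal and add hysteresis to avoid flickering between modes.
    """
    return "3D" if hand.height > lift_threshold else "2D"

if __name__ == "__main__":
    on_keys = HandSample(x=0.10, y=0.05, height=0.005)
    in_air = HandSample(x=0.12, y=0.10, height=0.12)
    print(classify_modality(on_keys))  # -> 2D (typing / clicking)
    print(classify_modality(in_air))   # -> 3D (direct manipulation)
```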

    inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

    Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.
    National Science Foundation (U.S.), Graduate Research Fellowship (Grant 1122374); Swedish Research Council (Fellowship); Blanceflor Foundation (Scholarship)
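    To make the idea of a dynamic physical affordance concrete, here is a small, hypothetical Python sketch that renders a raisable "button" as a height map for a generic pin-array display and lowers it when touched. The grid size, pin travel, and API are invented for illustration and are not the inFORM hardware interface.

```python
import numpy as np

def render_button(grid_shape=(24, 24), center=(12, 12), radius=3,
                  base_height=0.0, button_height=30.0):
    """Return a pin-height map (millimetres) that raises a circular
    'button' affordance out of an otherwise flat surface."""
    heights = np.full(grid_shape, base_height)
    ys, xs = np.indices(grid_shape)
    mask = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
    heights[mask] = button_height
    return heights

def press(heights, touch_cell, depress_by=20.0):
    """Crude touch response: lower the touched pin to acknowledge a press."""
    out = heights.copy()
    out[touch_cell] = max(0.0, out[touch_cell] - depress_by)
    return out

if __name__ == "__main__":
    frame = render_button()
    frame = press(frame, (12, 12))   # user pushes the centre of the button
    print(frame[10:15, 10:15])       # neighbourhood of the button affordance
```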

    Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

    We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
    National Science Foundation (U.S.), Graduate Research Fellowship Program (Grant No. 1122374)
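    The core of shape transmission is converting a depth image captured on one side into pin heights on the other. The following hedged Python sketch block-averages a depth frame down to a coarse pin grid and converts distance-from-camera into pin elevation; the grid resolution, table distance, and pin travel are assumed values for illustration, not the authors' calibration.

```python
import numpy as np

def depth_to_pin_heights(depth_mm, grid_shape=(24, 24),
                         table_depth_mm=1000.0, max_pin_mm=100.0):
    """Convert a captured depth image (distance from an overhead depth
    camera, in millimetres) into a coarse pin-height map for a remote
    shape display.

    Pixels at table level map to pin height 0; hands and objects above the
    table map to proportionally raised pins, clipped to the pin travel.
    """
    h, w = depth_mm.shape
    gh, gw = grid_shape
    # Block-average the depth image down to the pin grid resolution.
    blocks = depth_mm[:h - h % gh, :w - w % gw].reshape(
        gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return np.clip(table_depth_mm - blocks, 0.0, max_pin_mm)

if __name__ == "__main__":
    # Synthetic depth frame: flat table with a 60 mm-tall object in the middle.
    depth = np.full((240, 240), 1000.0)
    depth[100:140, 100:140] = 940.0
    print(depth_to_pin_heights(depth)[9:15, 9:15])
```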

    Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

    Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.
    National Science Foundation (U.S.), Graduate Research Fellowship (Grant 1122374)
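    As a purely conceptual illustration of the sublimation/deposition metaphor, the toy Python sketch below models a digital object that can switch between a physical rendering state (driving the shape display) and a purely virtual AR state. The class and method names are hypothetical and only mirror the state transitions described in the abstract.

```python
from enum import Enum

class RenderState(Enum):
    PHYSICAL = "physical"   # rendered as pins on the shape display
    VIRTUAL = "virtual"     # rendered only as co-located 3D graphics

class HybridModel:
    """Toy model of the sublimation/deposition metaphor: the same digital
    model can be 'deposited' into a physical shape or 'sublimated' back
    into purely virtual graphics."""

    def __init__(self, name: str):
        self.name = name
        self.state = RenderState.VIRTUAL

    def deposit(self):
        """Transition virtual -> physical (drive the shape display)."""
        self.state = RenderState.PHYSICAL
        print(f"{self.name}: raising pins to match the model surface")

    def sublimate(self):
        """Transition physical -> virtual (flatten pins, keep AR graphics)."""
        self.state = RenderState.VIRTUAL
        print(f"{self.name}: flattening pins, rendering only in AR")

if __name__ == "__main__":
    terrain = HybridModel("terrain")
    terrain.deposit()    # touchable physical relief
    terrain.sublimate()  # back to a floating virtual rendering
```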

    Unobtrusive Augmentation of Physical Environments: Interaction Techniques, Spatial Displays and Ubiquitous Sensing

    The fundamental idea of Augmented Reality (AR) is to improve and enhance our perception of the surroundings, through the use of sensing, computing and display systems that make it possible to augment the physical environment with virtual computer graphics. AR is, however, often associated with user-worn equipment, whose current complexity and lack of comfort limit its applicability in many scenarios. The goal of this work has been to develop systems and techniques for uncomplicated AR experiences that support sporadic and spontaneous interaction with minimal preparation on the user’s part. This dissertation defines a new concept, Unobtrusive AR, which emphasizes an optically direct view of a visually unaltered physical environment, the avoidance of user-worn technology, and the preference for unencumbering techniques. The first part of the work focuses on the design and development of two new AR display systems. They illustrate how AR experiences can be achieved through transparent see-through displays that are positioned in front of the physical environment to be augmented. The second part presents two novel sensing techniques for AR, which employ an instrumented surface for unobtrusive tracking of active and passive objects. These techniques have no visible sensing technology or markers, and are suitable for deployment in scenarios where it is important to maintain the visual qualities of the real environment. The third part of the work discusses a set of new interaction techniques for spatially aware handheld displays, public 3D displays, touch screens, and immaterial displays (which are not constrained by solid surfaces or enclosures). Many of the techniques are also applicable to human-computer interaction in general, as indicated by the accompanying qualitative and quantitative insights from user evaluations. The thesis contributes a set of novel display systems, sensing technologies, and interaction techniques to the field of human-computer interaction, and brings new perspectives to the enhancement of real environments through computer graphics.

    Interaction techniques using prosodic features of speech and audio localization

    We describe several approaches for using prosodic features of speech and audio localization to control interactive applications. This information can be applied to parameter control, as well as to speech disambiguation. We discuss how characteristics of spoken sentences can be exploited in the user interface; for example, by considering the speed with which a sentence is spoken and the presence of extraneous utterances. We also show how coarse audio localization can be used for low-fidelity gesture tracking, by inferring the speaker's head position.
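    Coarse audio localization of this kind is commonly done by estimating the time difference of arrival (TDOA) between two microphones and converting it to a bearing via arcsin(TDOA · c / mic spacing). The following Python sketch is a generic illustration of that technique under assumed microphone spacing and sample rate; it is not the paper's implementation.

```python
import numpy as np

def estimate_bearing(left, right, fs, mic_spacing=0.3, speed_of_sound=343.0):
    """Estimate a coarse bearing to the speaker from a two-microphone recording.

    The time difference of arrival (TDOA) between the channels is found via
    cross-correlation and converted to an angle with
    angle = arcsin(TDOA * c / mic_spacing). A positive angle means the sound
    reached the `right` microphone first.
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    tdoa = lag / fs
    ratio = np.clip(tdoa * speed_of_sound / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(ratio)))

if __name__ == "__main__":
    fs = 16000
    rng = np.random.default_rng(0)
    burst = rng.standard_normal(4000)          # noise burst standing in for speech
    delay = 8                                  # samples: right channel lags the left
    left = burst
    right = np.concatenate([np.zeros(delay), burst[:-delay]])
    # Sound reaches the left microphone first, so the bearing comes out negative.
    print(f"estimated bearing: {estimate_bearing(left, right, fs):.1f} degrees")
```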